    Data Service Platform for Sentinel-2 Surface Reflectance and Value-Added Products: System Use and Examples

    This technical note presents the first Sentinel-2 data service platform for obtaining atmospherically-corrected images and generating the corresponding value-added products for any land surface on Earth. Using the European Space Agency’s (ESA) Sen2Cor algorithm, the platform processes ESA’s Level-1C top-of-atmosphere reflectance to atmospherically-corrected bottom-of-atmosphere (BoA) reflectance (Level-2A). The processing runs on-demand, with global coverage, on the Earth Observation Data Centre (EODC), a public-private collaborative IT infrastructure in Vienna (Austria) for archiving, processing, and distributing Earth observation (EO) data. Using the data service platform, users can submit processing requests and access the results via a user-friendly web page or via a dedicated application programming interface (API). Building on the processed Level-2A data, the platform also creates value-added products with a particular focus on agricultural vegetation monitoring, such as leaf area index (LAI) and broadband hemispherical-directional reflectance factor (HDRF). An analysis of the performance of the data service platform, along with its processing capacity, is presented. Some preliminary consistency checks of the algorithm implementation are included to demonstrate the expected product quality. In particular, Sentinel-2 data were compared to atmospherically-corrected Landsat-8 data for six test sites, achieving an R² of 0.90 and a Root Mean Square Error (RMSE) of 0.031. LAI was validated for one test site using ground estimations. Results show very good agreement (R² = 0.83) and an RMSE of 0.32 m²/m² (12% of the mean value).
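
    The Sentinel-2 vs. Landsat-8 comparison above is summarized by two agreement metrics. As an illustrative sketch only (not the platform's own code; the sample arrays below are hypothetical), the following Python snippet shows how R² and RMSE can be computed from co-located BoA reflectance samples:

    ```python
    # Hedged sketch: compute the R^2 and RMSE agreement metrics between two
    # co-located reflectance sample vectors (e.g. Sentinel-2 vs. Landsat-8 BoA).
    # The input arrays are placeholders, not data from the study.
    import numpy as np

    def agreement_metrics(s2_boa, l8_boa):
        """Return (r2, rmse) for two 1-D reflectance sample vectors."""
        s2 = np.asarray(s2_boa, dtype=float)
        l8 = np.asarray(l8_boa, dtype=float)
        rmse = np.sqrt(np.mean((s2 - l8) ** 2))   # root mean square difference
        r2 = np.corrcoef(s2, l8)[0, 1] ** 2       # squared Pearson correlation
        return r2, rmse

    # Illustrative call with synthetic reflectance values.
    s2 = np.array([0.05, 0.12, 0.20, 0.31, 0.44])
    l8 = np.array([0.06, 0.11, 0.22, 0.30, 0.47])
    print(agreement_metrics(s2, l8))
    ```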

    Cloud cover assessment for operational crop monitoring systems in tropical areas.

    Abstract: The potential of optical remote sensing data to identify, map and monitor croplands is well recognized. However, clouds strongly limit the usefulness of optical imagery for these applications. This paper aims at assessing cloud cover conditions over four states in the tropical and sub-tropical Center-South region of Brazil to guide the development of an appropriate agricultural monitoring system based on Landsat-like imagery. Cloudiness was assessed during overlapping four-month periods to match the typical length of crop cycles in the study area. The percentage of clear-sky occurrence was computed from the 1 km resolution MODIS Cloud Mask product (MOD35) considering 14 years of data between July 2000 and June 2014. Results showed high seasonality of cloud occurrence within the crop year, with strong variations across the study area. The maximum seasonality was observed for the two states in the northern part of the study area (i.e., the ones closer to the Equator), which also presented the lowest average values (15%) of clear-sky occurrence during the main (summer) cropping period (November to February). In these locations, optical data face severe constraints for mapping summer crops. On the other hand, relatively favorable conditions were found in the southern part of the study region. In the South, clear-sky values of around 45% were found and no significant clear-sky seasonality was observed. Results underpin the challenges of implementing an operational crop monitoring system based solely on optical remote sensing imagery in tropical and sub-tropical regions, in particular if short-cycle crops have to be monitored during the cloudy summer months. To cope with cloudiness issues, we recommend the use of new systems with higher repetition rates such as Sentinel-2. For local studies, Unmanned Aerial Vehicles (UAVs) might be used to augment the observing capability. Multi-sensor approaches combining optical and microwave data can be another option. In cases where wall-to-wall maps are not mandatory, statistical sampling approaches might also be a suitable alternative for obtaining useful crop area information.
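
    As a minimal illustration of the kind of computation described above (the study's actual MOD35 processing chain is not reproduced; the array names, the clear/cloudy encoding and the window definition are assumptions), the sketch below derives the per-pixel percentage of clear-sky occurrence for a four-month window from a daily cloud-mask stack:

    ```python
    # Illustrative sketch only. Given a boolean stack `clear` of shape
    # (n_days, rows, cols), where True means "clear sky", compute the
    # per-pixel percentage of clear observations within a date window
    # (e.g. the November-February summer cropping period).
    import numpy as np

    def clear_sky_percentage(clear, day_of_year, window_days):
        """Percent of clear-sky observations per pixel within the window."""
        mask = np.isin(day_of_year, window_days)   # select dates in the window
        subset = clear[mask]                       # (n_selected, rows, cols)
        return 100.0 * subset.mean(axis=0)         # percent clear per pixel

    # Hypothetical example: 365 daily masks for a 100 x 100 km grid at 1 km.
    rng = np.random.default_rng(0)
    clear = rng.random((365, 100, 100)) > 0.6
    doy = np.arange(1, 366)
    nov_feb = np.concatenate([np.arange(305, 366), np.arange(1, 60)])  # Nov 1 - Feb 28
    pct = clear_sky_percentage(clear, doy, nov_feb)
    print(pct.mean())
    ```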

    Automated segmentation parameter selection and classification of urban scenes using open-source software

    Comparative analysis of different retrieval methods for mapping grassland leaf area index using airborne imaging spectroscopy

    Fine-scale maps of vegetation biophysical variables are useful status indicators for monitoring and managing national parks and endangered habitats. Here, we assess in a comparative way four different retrieval methods for estimating leaf area index (LAI) in grassland: two radiative transfer model (RTM) inversion methods (one based on look-up tables (LUT) and one based on predictive equations) and two statistical modelling methods (one partly, the other entirely, based on in situ data). For prediction, spectral data were used that had been acquired over Majella National Park in Italy by the airborne hyperspectral HyMap instrument. To assess the performance of the four investigated models, the normalized root mean squared error (nRMSE) and coefficient of determination (R²) between estimates and in situ LAI measurements are reported (n = 41). Using a jackknife approach, we also quantified the accuracy and robustness of the empirical models as a function of the size of the available calibration data set. The results of the study demonstrate that the LUT-based RTM inversion yields higher accuracies for LAI estimation (R² = 0.91, nRMSE = 0.18) compared to the RTM inversion based on predictive equations (R² = 0.79, nRMSE = 0.38). The two statistical methods yield accuracies similar to the LUT method. However, as expected, the accuracy and robustness of the statistical models decrease when the size of the calibration database is reduced to fewer samples. The results of this study are of interest for the remote sensing community developing improved inversion schemes for spaceborne hyperspectral sensors applicable to different vegetation types. The examples provided in this paper may also serve as illustrations of the drawbacks and advantages of physical and empirical models.
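
    To illustrate the general idea behind a LUT-based inversion (the paper's actual radiative transfer model, cost function and regularization are not shown; all arrays below are synthetic placeholders), a minimal nearest-entry look-up could be sketched as follows:

    ```python
    # Hedged sketch of LUT inversion: each measured spectrum is assigned the
    # LAI of the look-up-table entry with the smallest spectral RMSE.
    import numpy as np

    def lut_invert_lai(measured, lut_spectra, lut_lai):
        """measured: (n_pixels, n_bands); lut_spectra: (n_lut, n_bands)."""
        diff = measured[:, None, :] - lut_spectra[None, :, :]
        cost = np.sqrt(np.mean(diff ** 2, axis=2))   # (n_pixels, n_lut) spectral RMSE
        best = np.argmin(cost, axis=1)               # index of best-matching LUT entry
        return lut_lai[best]

    # Tiny synthetic example (illustrative only).
    rng = np.random.default_rng(1)
    lut_spectra = rng.random((500, 120))   # 500 simulated spectra, 120 HyMap-like bands
    lut_lai = rng.uniform(0.0, 6.0, 500)
    measured = lut_spectra[:5] + rng.normal(0, 0.01, (5, 120))
    print(lut_invert_lai(measured, lut_spectra, lut_lai))
    ```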

    Self-guided segmentation and classification of multi-temporal landsat 8 images for crop type mapping in southeastern Brazil.

    Abstract: Only well-chosen segmentation parameters ensure optimum results of object-based image analysis (OBIA). Manually defining suitable parameter sets can be a time-consuming approach that does not necessarily lead to optimum results; the subjectivity of the manual approach is also obvious. For this reason, supervised segmentation as proposed by Stefanski et al. (2013) integrates the segmentation and classification tasks, so that the segmentation is optimized directly with respect to the subsequent classification. In this contribution, we build on this work and develop a fully autonomous workflow for supervised object-based classification, combining image segmentation and random forest (RF) classification. Starting from a fixed set of randomly selected and manually interpreted training samples, suitable segmentation parameters are automatically identified. A sub-tropical study site located in São Paulo State (Brazil) was used to evaluate the proposed approach. Two multi-temporal Landsat 8 image mosaics were used as input (from August 2013 and January 2014), together with training samples from field visits and VHR (RapidEye) photo-interpretation. Using four test sites of 15 × 15 km² with manually interpreted crops as independent validation samples, we demonstrate that the approach leads to robust classification results. On these samples (pixel-wise, n ≈ 1 million), an overall accuracy (OA) of 80% could be reached while classifying five classes: sugarcane, soybean, cassava, peanut and others. We found that the overall accuracy obtained from the four test sites was only marginally lower than the out-of-bag OA obtained from the training samples. Amongst the five classes, sugarcane and soybean were classified best, while cassava and peanut were often misclassified due to similarity in the spatio-temporal feature space and high within-class variabilities. Interestingly, misclassified pixels were in most cases correctly identified through the RF classification margin, which is produced as a by-product alongside the classification map.
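
    As a hedged sketch of the classification stage only (the segmentation-parameter search is omitted; the feature matrix and labels below are random placeholders, not the study's data), the snippet shows a random forest with out-of-bag accuracy and the per-sample classification margin used to flag uncertain predictions:

    ```python
    # Sketch: random forest classification with out-of-bag accuracy and the
    # classification margin (gap between the two highest class vote fractions).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    X = rng.random((1000, 12))        # placeholder multi-temporal object/band features
    y = rng.integers(0, 5, 1000)      # five classes: sugarcane, soybean, cassava, peanut, other

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)
    print("OOB accuracy:", rf.oob_score_)

    proba = rf.predict_proba(X)            # class vote fractions per sample
    top2 = np.sort(proba, axis=1)[:, -2:]  # two highest class probabilities
    margin = top2[:, 1] - top2[:, 0]       # small margin -> uncertain prediction
    print("mean RF margin:", margin.mean())
    ```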